- Path: news.clark.net!usenet
- From: budge@clark.net (Joe Budge)
- Newsgroups: comp.lang.c++
- Subject: HELP! GPFs under BC4.x w/ arrays near size_t
- Date: 3 Apr 1996 15:07:10 GMT
- Organization: Clark Internet Services, Inc.
- Message-ID: <4ju46u$qks@clarknet.clark.net>
- NNTP-Posting-Host: budge.clark.net
- Mime-Version: 1.0
- X-Newsreader: WinVN 0.93.14
-
- I am debugging several routines which use dynamic arrays of
- single-byte data (char is one example). I manage array allocation
- with the standard new and delete operators. Simple routines which
- otherwise work fine start throwing GPFs or running into blown
- pointers and heap corruption when the array size begins to approach
- the size_t limit under 16-bit Windows (64K). By "simple" I mean lines like
- "x = myarray[i];", which in one routine blows up if i >= UINT_MAX - 64.
- This behavior happens under both BC 4.0 and 4.53.
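-
- Here is a stripped-down sketch of the sort of thing that fails. The
- names and the exact size are made up for illustration; this is not my
- actual routine:
-
-     #include <limits.h>
-     #include <stddef.h>
-
-     void demo()
-     {
-         // just under 64K of single-byte data; size_t is 16 bits here
-         size_t n = UINT_MAX - 64;     // illustrative size, not my real one
-         char *myarray = new char[n];
-         if (myarray == 0)             // assuming new returns 0 on failure
-             return;
-
-         char x;
-         for (size_t i = 0; i < n; i++)
-             x = myarray[i];           // GPFs once i gets near the top
-
-         delete [] myarray;
-     }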
-
- The only clue I've been able to dig up is from the Programmer's Guide,
- which, in the chapter on Windows programming, says that "Each global
- memory block carries an overhead of at least 20 bytes." "At least"
- is a bit vague...
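-
- If that overhead really does come out of the same 64K the array lives
- in, the back-of-the-envelope arithmetic would go something like this.
- The only documented number is the 20-byte minimum; the allowance for
- new's own bookkeeping is purely my guess:
-
-     #include <limits.h>
-     #include <stddef.h>
-
-     const size_t SEG_MAX     = UINT_MAX;   // 65535, the most size_t can express
-     const size_t GLOBAL_OVHD = 20;         // "at least 20 bytes" per the manual
-     const size_t NEW_OVHD    = 16;         // pure guess at new's own header
-     const size_t SAFE_MAX    = SEG_MAX - GLOBAL_OVHD - NEW_OVHD;  // roughly 65499?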
-
- Since a global memory block can be up to 1MB (I'm compiling for
- a 286 target), it doesn't seem to make sense that this overhead
- should come out of the hide of my array. Nevertheless, I'm pretty
- sure that my arrays are running into _something_ before they get
- to the size_t limit, and I haven't been able to figure out what. And
- that means I don't know how far below the size_t limit I need to stay
- in order to be safe.
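-
- For what it's worth, a crude probe along these lines ought to show
- where the ceiling actually is: step the size down from the limit until
- an allocation survives having both ends touched. This assumes new
- returns 0 on failure rather than throwing:
-
-     #include <limits.h>
-     #include <stddef.h>
-     #include <stdio.h>
-
-     // Return 1 if n bytes can be allocated and both ends touched cleanly.
-     int probe(size_t n)
-     {
-         char *p = new char[n];
-         if (p == 0)
-             return 0;                 // allocation itself failed
-         p[0]     = 0x11;
-         p[n - 1] = 0x22;              // the same sort of access that GPFs on me
-         int ok = (p[0] == 0x11 && p[n - 1] == 0x22);
-         delete [] p;
-         return ok;
-     }
-
-     // Step down from the size_t limit; report the first size that survives.
-     void find_ceiling()
-     {
-         for (size_t n = UINT_MAX; n >= 256; n -= 16)
-         {
-             printf("trying %u\n", n); // if it GPFs, the last size printed is the culprit
-             if (probe(n))
-             {
-                 printf("first clean size: %u bytes\n", n);
-                 return;
-             }
-         }
-     }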
-
- If anyone can offer any insight into this issue I would greatly
- appreciate it.
-
- Thanks,
- Joe Budge
-
-